Principles of sensorimotor control and learning in complex motor tasks
The brain coordinates a continuous coupling between perception and action in the presence of uncertainty and incomplete knowledge about the world. This mapping is enabled by control policies, and motor learning can be viewed as the updating of such policies to improve performance on given task objectives. Despite substantial progress in computational sensorimotor control and empirical approaches to motor adaptation, it remains unclear to date how the brain learns motor control policies while updating its internal model of the world.
In light of this challenge, we propose a computational framework that employs error-based learning and exploits the brain’s inherent link between forward models and feedback control to compute dynamically updated policies. The framework merges optimal feedback control (OFC) policy learning with ongoing system identification of the task dynamics so as to explain behavior in complex object manipulation tasks. Its formalization encompasses our empirical findings that action is learned and generalised with respect to both a body-based and an object-based frame of reference. Importantly, our approach successfully predicts how the brain makes continuous decisions for the generation of complex trajectories in an experimental paradigm with unfamiliar task conditions. A complementary method expands the motor learning perspective from the level of policy optimisation to the level of policy exploration, employing computational analysis to reverse engineer, and subsequently assess, the control process in a whole-body manipulation paradigm.
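The coupling described above, a feedback policy that is continually re-derived while the task dynamics are identified from observed motion, can be illustrated with a minimal sketch. The point-mass dynamics, cost weights, exploration schedule, and refit interval below are all assumptions made for this illustration; they are not the thesis implementation.

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=300):
    """Discrete-time LQR feedback gain via backward Riccati recursion."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.01
# True (unknown to the controller) object dynamics: a lightly damped mass.
A_true = np.array([[1.0, dt], [0.0, 1.0 - 0.5 * dt]])
B_true = np.array([[0.0], [dt]])

Q = np.diag([10.0, 1.0])    # penalise position/velocity error at the target
R = np.array([[0.01]])      # penalise control effort

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])    # start one unit from the target (the origin)
data_z, data_y = [], []     # regressors [x, u] and observed next states
K = None

for t in range(800):
    if t < 50:
        u = rng.normal(0.0, 3.0, size=1)      # exploratory "babbling" phase
    else:
        u = -(K @ x)                          # feedback policy on the estimate
    x_next = A_true @ x + (B_true @ u).ravel()
    data_z.append(np.concatenate([x, u]))
    data_y.append(x_next)
    if t >= 49 and t % 25 == 24:
        # System identification: least-squares fit of x_{t+1} = [A B][x; u],
        # after which the policy is re-derived from the updated model.
        theta, *_ = np.linalg.lstsq(np.array(data_z), np.array(data_y),
                                    rcond=None)
        A_hat, B_hat = theta.T[:, :2], theta.T[:, 2:]
        K = lqr_gain(A_hat, B_hat, Q, R)
    x = x_next

print(round(abs(x[0]), 3))  # position has been driven close to the target
```

The design choice mirrors the abstract's pairing: the forward model (the identified `A_hat`, `B_hat`) and the feedback controller (`K`) are updated together, so prediction errors improve the policy itself rather than only the state estimate.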
Another contribution of this thesis is to link motor psychophysics and computational motor control to their underlying neural foundations; a link that calls for further advances in motor neuroscience and can inform our theoretical insight into sensorimotor processes under physiological constraints. To this end, we design, build and test an fMRI-compatible haptic object manipulation system that relates closed-loop motor control studies to neurophysiology. The system is clinically adapted and employed to host a naturalistic object manipulation paradigm with healthy human subjects and Friedreich’s ataxia patients. We present methodology that elicits neuroimaging correlates of sensorimotor control and learning and extracts longitudinal neurobehavioral markers of disease progression (i.e. neurodegeneration).
Our findings enhance the understanding of sensorimotor control and learning mechanisms that underlie complex motor tasks. They furthermore provide a unified methodological platform to bridge the divide between behavior, computation and neural implementation with promising clinical and technological implications (e.g. diagnostics, robotics, BMI).
f2MOVE: fMRI-compatible haptic object manipulation system for closed-loop motor control studies
Functional neuroimaging plays a key role in addressing open questions in systems and motor neuroscience that are directly applicable to brain–machine interfaces. Building on our low-cost motion capture technology (fMOVE), we developed f2MOVE, an fMRI-compatible system for 6-DOF goal-directed hand and wrist movements of human subjects, enabling closed-loop sensorimotor haptic experiments with simultaneous neuroimaging. f2MOVE uses a high-frame-rate camera with a high-zoom lens and a motion tracking algorithm that tracks, in real time, the position of special markers attached to a hand-held object in a novel customized haptic interface. The system operates at a high update rate (120 Hz) with sufficiently low time delays (<20 ms) to enable visual feedback while complex, goal-oriented movements are recorded. We present both the accuracy of our motion tracking against a reference signal and the efficacy of the system in evoking motor-control-specific brain activations in healthy subjects. Our technology and approach thus support the real-time, closed-loop study of the neural foundations of complex haptic motor tasks using neuroimaging.
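The tracking step such a system performs each frame can be sketched in a toy form: threshold a camera frame, locate the bright-marker centroid, and return the position within the frame budget. The synthetic frame, threshold value, and timing arithmetic below are illustrative assumptions, not the published f2MOVE pipeline.

```python
import numpy as np

def marker_centroid(frame, threshold=200):
    """Return the (row, col) centroid of pixels above `threshold`, or None."""
    rows, cols = np.nonzero(frame > threshold)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Synthetic 8-bit camera frame with one bright circular marker at (120, 80).
frame = np.zeros((480, 640), dtype=np.uint8)
yy, xx = np.ogrid[:480, :640]
frame[(yy - 120) ** 2 + (xx - 80) ** 2 <= 5 ** 2] = 255

centroid = marker_centroid(frame)
print(centroid)  # centroid of the bright marker, near (120, 80)

# At a 120 Hz update rate the per-frame budget is ~8.3 ms, so detection plus
# feedback rendering must stay well inside it to keep end-to-end delay <20 ms.
frame_budget_ms = 1000.0 / 120.0
```

A simple intensity-threshold centroid like this is sub-pixel accurate for a well-isolated bright marker, which is one reason marker-based tracking can meet tight closed-loop latency budgets.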
A predictive processing model of episodic memory and time perception
Human perception and experience of time is strongly influenced by ongoing stimulation, memory of past experiences, and the required task context. When paying attention to time, time experience seems to expand; when distracted, it seems to contract. When considering time based on memory, the experience may differ from the experience in the moment, exemplified by sayings like “time flies when you’re having fun”. Experience of time also depends on the content of perceptual experience – rapidly changing or complex perceptual scenes seem longer in duration than less dynamic ones. The complexity of interactions between attention, memory, and perceptual stimulation is a likely reason that an overarching theory of time perception has been difficult to achieve. Here, we introduce a model of perceptual processing and episodic memory that makes use of hierarchical predictive coding, short-term plasticity, spatio-temporal attention, and episodic memory formation and recall, and apply this model to the problem of human time perception. In an experiment with ∼13,000 human participants we investigated the effects of memory, cognitive load, and stimulus content on duration reports of dynamic natural scenes up to ∼1 minute long. Using our model to generate duration estimates, we compared human and model performance. Model-based estimates replicated key qualitative biases, including differences by cognitive load (attention), scene type (stimulation), and whether the judgement was made based on current or remembered experience (memory). Our work provides a comprehensive model of human time perception and a foundation for exploring the computational basis of episodic memory within a hierarchical predictive coding framework.
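The content-dependence described above, that rapidly changing scenes seem longer, can be caricatured by an accumulation sketch: a duration estimate built from the number of salient changes in a perceptual feature stream. The simulated feature streams, threshold, and seconds-per-event scaling are invented for illustration; this is not the paper's full hierarchical predictive-coding model.

```python
import numpy as np

def estimated_duration(features, threshold, seconds_per_event=0.5):
    """Accumulate salient frame-to-frame feature changes into a duration."""
    changes = np.linalg.norm(np.diff(features, axis=0), axis=1)
    salient_events = int((changes > threshold).sum())
    return salient_events * seconds_per_event

rng = np.random.default_rng(1)
n_frames, dim = 600, 16            # e.g. ~60 s of features at 10 Hz

# "Busy" scene: the feature vector changes a lot from frame to frame.
busy = np.cumsum(rng.normal(0.0, 1.0, (n_frames, dim)), axis=0)
# "Static" scene: the same random walk with much smaller steps.
static = np.cumsum(rng.normal(0.0, 0.1, (n_frames, dim)), axis=0)

d_busy = estimated_duration(busy, threshold=3.5)
d_static = estimated_duration(static, threshold=3.5)
print(d_busy > d_static)  # the busier scene accumulates more events
```

Because the busier stream crosses the salience threshold far more often, its accumulated estimate is longer, qualitatively reproducing the stimulation bias the abstract reports (though attention and memory effects require the full model).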